backpropagation learning rule

See also in other dictionaries:

  • Backpropagation — Backpropagation, or propagation of error, is a common method of teaching artificial neural networks how to perform a given task. It was first described by Paul Werbos in 1974, but it wasn't until 1986, through the work of David E. Rumelhart,… (Wikipedia; a code sketch follows this list)

  • Oja's rule — Oja's learning rule, or simply Oja's rule, named after the Finnish computer scientist Erkki Oja, is a model of how neurons in the brain or in artificial neural networks change connection strength, or learn, over time. It is a modification of the… (Wikipedia; see the Oja's rule sketch after the list)

  • Delta rule — The delta rule is a gradient descent learning rule for updating the weights of the artificial neurons in a single-layer perceptron. It is a special case of the more general backpropagation algorithm. For a neuron with activation function the… (Wikipedia; see the delta-rule sketch after the list)

  • Neural network — The term neural network was traditionally used to refer to a network or circuit of biological neurons. The modern usage of the term… (Wikipedia)

  • Neural cryptography — is a branch of cryptography dedicated to analyzing the application of stochastic algorithms, especially neural network algorithms, for use in encryption and cryptanalysis. … (Wikipedia)

  • Artificial neural network — An artificial neural network (ANN), usually called a neural network (NN), is a mathematical or computational model inspired by the structure and/or functional aspects of biological neural networks. A neural network consists of an… (Wikipedia)

  • Feedforward neural network — A feedforward neural network is an artificial neural network where connections between the units do not form a directed cycle. This is different from recurrent neural networks. The feedforward neural network was the first and arguably simplest… (Wikipedia)

  • Recurrent neural network — A recurrent neural network (RNN) is a class of neural network where connections between units form a directed cycle. This creates an internal state of the network, which allows it to exhibit dynamic temporal behavior. Recurrent neural networks must… (Wikipedia; see the recurrent-state sketch after the list)

  • Generalized Hebbian Algorithm — The Generalized Hebbian Algorithm (GHA), also known in the literature as Sanger's rule, is a linear feedforward neural network model for unsupervised learning with applications primarily in principal components analysis. First defined in 1989… (Wikipedia; see the GHA sketch after the list)

  • Connectionism — is a set of approaches in the fields of artificial intelligence, cognitive psychology, cognitive science, neuroscience and philosophy of mind that models mental or behavioral phenomena as the emergent processes of interconnected networks of… (Wikipedia)

  • Synaptic weight — In neuroscience and computer science, synaptic weight refers to the strength or amplitude of a connection between two nodes, corresponding in biology to the amount of influence the firing of one neuron has on another. The term is typically used… (Wikipedia)
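
The Backpropagation entry above describes the method only in passing, so here is a minimal sketch of the idea: a one-hidden-layer network with sigmoid units and squared-error loss, trained by propagating the output error back through the layers. The XOR task, the layer sizes, and the names (W1, W2, b1, b2, lr) are illustrative assumptions, not details from the entry.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
    T = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input-to-hidden parameters
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden-to-output parameters
    lr = 0.5                                        # learning rate

    for epoch in range(20000):
        # forward pass
        H = sigmoid(X @ W1 + b1)            # hidden activations
        Y = sigmoid(H @ W2 + b2)            # network outputs
        # backward pass: send the output error back toward the input layer
        dY = (Y - T) * Y * (1 - Y)          # error signal at the output unit
        dH = (dY @ W2.T) * H * (1 - H)      # error signal at the hidden units
        # gradient-descent weight updates
        W2 -= lr * H.T @ dY
        b2 -= lr * dY.sum(axis=0)
        W1 -= lr * X.T @ dH
        b1 -= lr * dH.sum(axis=0)

    print(Y.round(2))  # typically approaches the XOR targets 0, 1, 1, 0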
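
The Oja's rule entry is cut off before the rule itself, so here is a minimal sketch of the standard textbook form for a single linear neuron, dw = eta * y * (x - y * w), which drives the weight vector toward the first principal component of the inputs. The synthetic data, learning rate, and names are assumptions made for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    # correlated 2-D data whose leading principal direction is roughly (1, 1)
    X = rng.normal(size=(5000, 2)) @ np.array([[2.0, 1.5], [1.5, 2.0]])

    w = rng.normal(size=2)   # initial weight vector
    eta = 0.001              # learning rate

    for x in X:
        y = w @ x                    # output of the linear neuron
        w += eta * y * (x - y * w)   # Hebbian term minus a decay that keeps |w| near 1

    print(w, np.linalg.norm(w))  # w tends toward a unit-length first principal component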
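
The Delta rule entry breaks off at "For a neuron with activation function the…"; the usual statement is that each weight moves by eta * (t - y) * g'(h) * x_i. Below is a minimal sketch for one sigmoid output neuron, where g'(h) = y * (1 - y); the AND task and the names (w, b, eta) are illustrative assumptions.

    import numpy as np

    def sigmoid(h):
        return 1.0 / (1.0 + np.exp(-h))

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
    t = np.array([0, 0, 0, 1], dtype=float)                      # logical AND targets
    w = np.zeros(2)   # weights of the single output neuron
    b = 0.0           # bias, treated as a weight on a constant input of 1
    eta = 0.5         # learning rate

    for epoch in range(2000):
        for x, target in zip(X, t):
            h = w @ x + b                        # net input
            y = sigmoid(h)                       # neuron output
            delta = (target - y) * y * (1 - y)   # error times the activation derivative
            w += eta * delta * x                 # delta-rule weight update
            b += eta * delta

    print(sigmoid(X @ w + b).round(2))  # moves toward the targets 0, 0, 0, 1

With a linear activation the same update reduces to the least-mean-squares rule; with more layers, the same error-times-derivative pattern becomes backpropagation, which is the sense in which the entry calls the delta rule a special case.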
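
To illustrate the point in the Recurrent neural network entry that a directed cycle gives the network an internal state, here is a minimal sketch of one common state update, h_t = tanh(W_x x_t + W_h h_{t-1}); the dimensions and names are assumptions chosen for brevity.

    import numpy as np

    rng = np.random.default_rng(0)
    W_x = rng.normal(scale=0.5, size=(3, 2))   # input-to-hidden weights
    W_h = rng.normal(scale=0.5, size=(3, 3))   # hidden-to-hidden (recurrent) weights

    h = np.zeros(3)                     # internal state, carried across time steps
    for x in rng.normal(size=(5, 2)):   # a short input sequence
        h = np.tanh(W_x @ x + W_h @ h)  # the cycle: h depends on its own previous value
        print(h.round(3))               # the state at each step reflects the whole input history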
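
The Generalized Hebbian Algorithm entry is cut off after "First defined in 1989…"; the standard form of the rule updates each output unit by dw_i = eta * y_i * (x - sum_{j<=i} y_j w_j), so successive units extract successive principal components. Below is a minimal sketch with two output units on synthetic data; the data, learning rate, and names are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    # zero-mean data with axis-aligned variances of roughly 9, 1 and 0.09
    X = rng.normal(size=(10000, 3)) * np.array([3.0, 1.0, 0.3])

    W = rng.normal(scale=0.1, size=(2, 3))   # one row of weights per output unit
    eta = 0.002                              # learning rate

    for x in X:
        y = W @ x                            # linear outputs
        for i in range(W.shape[0]):
            # reconstruct x from units 0..i, then take a Hebbian step on the residual
            recon = y[: i + 1] @ W[: i + 1]
            W[i] += eta * y[i] * (x - recon)

    print(W.round(2))  # rows tend toward the leading principal directions (±e1, ±e2)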
